Limitations of One-Hidden-Layer Perceptron Networks
Author
Abstract
The limitations of one-hidden-layer perceptron networks in efficiently representing finite mappings are investigated. It is shown that almost any uniformly randomly chosen mapping on a sufficiently large finite domain cannot be tractably represented by a one-hidden-layer perceptron network. This existential probabilistic result is complemented by a concrete example of a class of functions constructed using quasi-random sequences. Analogies with the central paradox of coding theory and the no-free-lunch theorem are discussed.
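To make the object of study concrete, here is a minimal sketch (not from the paper) of a one-hidden-layer perceptron network; the signum activation and all names are illustrative assumptions.

import numpy as np

def shallow_perceptron_net(x, A, b, w, w0=0.0):
    # f(x) = w0 + sum_i w[i] * sign(A[i] . x + b[i]):
    # one hidden layer of signum perceptron units followed by
    # a linear output unit.
    hidden = np.sign(A @ x + b)
    return w0 + w @ hidden

A network with n such hidden units on d-dimensional inputs has roughly n(d + 2) + 1 real parameters, while there are 2^(2^d) Boolean mappings on {0,1}^d; a counting mismatch of this kind is the intuition behind probabilistic arguments of the sort the abstract describes.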
Similar Resources
Bounds on Sparsity of One-Hidden-Layer Perceptron Networks
The limitations of one-hidden-layer (shallow) perceptron networks in sparsely representing multivariable functions are investigated. A concrete class of functions is described whose computation by shallow perceptron networks requires either a large number of units or is unstable due to large output weights. The class is constructed using pseudo-noise sequences, which have many features of random sequences...
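As an aside, a pseudo-noise (m-)sequence of the kind mentioned above can be generated with a linear feedback shift register; the sketch below is my own illustration (function name and tap convention are assumptions, not from the paper).

def lfsr_m_sequence(taps, state, length):
    # Fibonacci LFSR: emit the last register bit, then shift in
    # the XOR of the tapped bits (taps are 1-indexed positions).
    reg = list(state)
    out = []
    for _ in range(length):
        out.append(reg[-1])
        feedback = 0
        for t in taps:
            feedback ^= reg[t - 1]
        reg = [feedback] + reg[:-1]
    return out

# The primitive polynomial x^4 + x + 1 yields a period-15
# m-sequence from a 4-bit register:
bits = lfsr_m_sequence(taps=(4, 1), state=(1, 0, 0, 0), length=15)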
Predicting the Grouting Ability of Sandy Soils by Artificial Neural Networks Based On Experimental Tests
In this paper, the grouting ability of sandy soils is investigated by artificial neural networks based on the results of chemical grout injection tests. In order to evaluate the soil grouting potential, experimental samples were prepared and then injected. The sand samples, with three different particle sizes (medium, fine, and silty) and three relative densities (30%, 50%, and 90%), were injected...
Prediction of breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks
Artificial neural networks, learning algorithms and mathematical models mimicking the information-processing ability of the human brain, can be used for non-linear and complex data. The aim of this study was to predict the breeding values for the milk production trait in Iranian Holstein cows applying artificial neural networks. Data on 35167 Iranian Holstein cows recorded between 1998 and 2009 were ...
4. Multilayer perceptrons and back-propagation
Multilayer feed-forward networks, or multilayer perceptrons (MLPs), have one or several "hidden" layers of nodes. This implies that they have two or more layers of weights. The limitations of simple perceptrons do not apply to MLPs. In fact, as we will see later, a network with just one hidden layer can represent any Boolean function (including the XOR, which is, as we saw, not linearly separable...
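A minimal illustration of that claim, assuming Heaviside threshold units (the weights below are one standard choice, not taken from the snippet's text): two hidden units computing OR and AND of the inputs suffice to represent XOR.

def step(z):
    # Heaviside threshold unit.
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)       # hidden unit 1: OR
    h_and = step(x1 + x2 - 1.5)      # hidden unit 2: AND
    return step(h_or - h_and - 0.5)  # output: OR and not AND

assert [xor_net(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == [0, 1, 1, 0]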
Properties of feedforward neural networks
In his seminal paper, Cover used geometrical arguments to compute the probability of separating two sets of patterns with a perceptron. We extend these ideas to feedforward networks with hidden layers. There are intrinsic limitations to the number of patterns that a net of this kind can separate, and we find quantitative bounds valid for any net with d input and h hidden neurons.
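For reference, Cover's function-counting theorem gives the number of dichotomies of p points in general position in R^d realizable by a single perceptron through the origin as C(p, d) = 2 * sum over k from 0 to d-1 of binom(p-1, k). The short sketch below of the resulting separability fraction is my own illustration, not the snippet's bounds for nets with hidden layers.

from math import comb

def cover_count(p, d):
    # Number of homogeneously linearly separable dichotomies of
    # p points in general position in d dimensions.
    return 2 * sum(comb(p - 1, k) for k in range(d))

# Fraction of all 2^p dichotomies a perceptron can realize; it
# equals exactly 1/2 at the capacity p = 2d and drops sharply
# beyond it.
for p in (5, 10, 20, 40):
    print(p, cover_count(p, 10) / 2 ** p)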
Journal title:
Volume, Issue:
Pages: -
Publication date: 2015